Convergence and Error Bound
Similar Articles
Convergence of Legendre wavelet collocation method for solving nonlinear Stratonovich Volterra integral equations
In this paper, we apply the Legendre wavelet collocation method to obtain the approximate solution of nonlinear Stratonovich Volterra integral equations. The main advantage of this method is that Legendre wavelets have the orthogonality property, so the expansion coefficients are easily calculated. Using this method, the solution of a nonlinear Stratonovich Volterra integral equation reduces to...
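As a minimal worked sketch (with notation assumed here rather than taken from the paper): the Legendre wavelets \(\psi_{nm}\) form an orthonormal basis of \(L^2[0,1)\), so the expansion coefficients of the unknown solution are plain inner products and the integral equation collapses into an algebraic system in those coefficients,
\[
u(t) \;\approx\; \sum_{n=1}^{2^{k-1}} \sum_{m=0}^{M-1} c_{nm}\,\psi_{nm}(t),
\qquad
c_{nm} = \int_0^1 u(t)\,\psi_{nm}(t)\,dt,
\]
since \(\int_0^1 \psi_{nm}(t)\,\psi_{n'm'}(t)\,dt = \delta_{nn'}\,\delta_{mm'}\).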
A Sharp Sufficient Condition for Sparsity Pattern Recovery
The sufficient number of linear and noisy measurements for exact and approximate sparsity pattern/support set recovery in the high-dimensional setting is derived. Although this problem has been addressed in the recent literature, there are still considerable gaps between those results and the exact limits of perfect support set recovery. To reduce this gap, in this paper the sufficient con...
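For orientation, the setting is the standard noisy sparse-measurement model of this literature (stated here as background, not quoted from the abstract):
\[
y = A x + w, \qquad A \in \mathbb{R}^{m \times n}, \qquad \|x\|_0 = k \ll n,
\]
and the goal is to recover \(\operatorname{supp}(x) = \{\, i : x_i \neq 0 \,\}\) from \((y, A)\), exactly or up to a small fraction of errors, with as few measurements \(m\) as possible.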
Adaptive SVRG Methods under Error Bound Conditions with Unknown Growth Parameter
The error bound, an inherent property of an optimization problem, has recently been revived in the development of algorithms with improved global convergence without strong convexity. The most studied error bound is the quadratic error bound, which generalizes strong convexity and is satisfied by a large family of machine learning problems. The quadratic error bound has been leveraged to achieve linear con...
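For concreteness, one common way the quadratic error bound (quadratic growth) condition is stated, writing \(f^\star\) for the optimal value and \(\mathcal{X}^\star\) for the solution set, is
\[
f(x) - f^\star \;\ge\; \frac{\mu}{2}\,\operatorname{dist}(x, \mathcal{X}^\star)^2 \qquad \text{for all } x .
\]
Strong convexity implies this inequality, but it also holds for problems where strong convexity fails, such as least squares with a rank-deficient design matrix; here \(\mu\) is the growth parameter referred to in the title.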
Global convergence of an inexact interior-point method for convex quadratic symmetric cone programming
In this paper, we propose a feasible interior-point method for convex quadratic programming over symmetric cones. The proposed algorithm relaxes the accuracy requirements in the solution of the Newton equation system by using an inexact Newton direction. Furthermore, we obtain an acceptable level of error in the inexact algorithm on convex quadratic symmetric cone programmin...
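As a generic illustration of the inexact Newton idea (the precise residual condition used for the symmetric cone Newton system in the paper may differ), the search direction only has to satisfy the Newton equation up to a relative residual,
\[
\| F(z_k) + F'(z_k)\,\Delta z_k \| \;\le\; \eta_k\,\| F(z_k) \|, \qquad 0 \le \eta_k < 1,
\]
so an iterative linear solver can be stopped early while the outer interior-point iteration still converges, provided the forcing terms \(\eta_k\) are controlled.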
Modified frame algorithm and its convergence acceleration by Chebyshev method
The aim of this paper is to improve the convergence rate of the frame algorithm based on the Richardson iteration and Chebyshev methods. Based on the Richardson iteration method, we first square the existing convergence rate of the frame algorithm, which in turn halves the number of iterations and speeds up convergence. Afterward, by using Chebyshev polynomials, we improve this s...
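A minimal numerical sketch of the classical frame (Richardson) iteration that such accelerations start from, using NumPy and a random frame (all names and sizes below are illustrative, not from the paper): the frame operator S has spectrum in [A, B], and the relaxed iteration contracts the reconstruction error by a factor of (B-A)/(B+A) per step.

import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 60                       # signal dimension, number of frame vectors
Phi = rng.standard_normal((m, n))   # rows are the frame vectors
S = Phi.T @ Phi                     # frame operator: S f = sum_i <f, phi_i> phi_i
eigs = np.linalg.eigvalsh(S)
A, B = eigs[0], eigs[-1]            # frame bounds

f = rng.standard_normal(n)          # signal to reconstruct
b = S @ f                           # computable from the frame coefficients <f, phi_i>
fk = np.zeros(n)
for _ in range(200):
    fk = fk + (2.0 / (A + B)) * (b - S @ fk)   # Richardson step
print(np.linalg.norm(f - fk))       # error is at most ((B - A)/(B + A))**200 * ||f||

Squaring this rate, as the abstract describes, reaches the same accuracy in roughly half as many iterations, and Chebyshev polynomials improve the rate further.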
New Analysis of Linear Convergence of Gradient-type Methods via Unifying Error Bound Conditions
The linear convergence of gradient-type methods on non-strongly convex optimization has been widely studied by introducing several notions as sufficient conditions. Influential examples include the error bound property, restricted strong convexity, the quadratic growth property, and the Kurdyka-Lojasiewicz property. In this paper, we first define a group of error bound con...
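As one concrete member of this family (a standard Polyak-Lojasiewicz-type illustration, not the paper's unified conditions): if \(f\) is \(L\)-smooth and satisfies
\[
\tfrac{1}{2}\,\|\nabla f(x)\|^2 \;\ge\; \mu\,\bigl(f(x) - f^\star\bigr) \qquad \text{for all } x,
\]
then gradient descent with step size \(1/L\) converges linearly,
\[
f(x_{k+1}) - f^\star \;\le\; \Bigl(1 - \frac{\mu}{L}\Bigr)\bigl(f(x_k) - f^\star\bigr),
\]
even when \(f\) is not strongly convex.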